54 research outputs found

    Technique(s) for Spike-Sorting

    Spike-sorting techniques attempt to classify a series of noisy electrical waveforms according to the identity of the neurons that generated them. Existing techniques perform this classification while ignoring several properties of actual neurons that could ultimately improve classification performance. In this chapter, after illustrating the spike-sorting problem with real data, we propose a more realistic spike train generation model. It incorporates both a description of "non-trivial" (i.e., non-Poisson) neuronal discharge statistics and a description of spike waveform dynamics (e.g., the spike amplitude decays for short inter-spike intervals). We show that this spike train generation model is analogous to a one-dimensional Potts spin-glass model. We can therefore use the computational methods developed in fields where Potts models are extensively used. These methods are based on the construction of a Markov chain in the space of model parameters and spike train configurations, where a configuration is defined by specifying a neuron of origin for each spike. This Markov chain is built such that its unique stationary density is the posterior density of model parameters and configurations given the observed data. A Monte Carlo simulation of the Markov chain is then used to estimate the posterior density. The theoretical background on Markov chains is provided, and the way to build the transition matrix of the Markov chain is illustrated with a simple but realistic model for data generation. Simulated data are used to illustrate the performance of the method and to show that it can easily cope with neurons generating spikes with highly dynamic waveforms and/or generating strongly overlapping clusters on Wilson plots.
    Comment: 40 pages, 18 figures. LaTeX source file prepared with LyX. To be published as a chapter of the book "Models and Methods in Neurophysics" edited by D. Hansel and C. Meunier.
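    As a concrete illustration of the configuration sampling described above, the following toy Python sketch runs a Gibbs sampler over the neuron-of-origin labels of simulated spike amplitudes. It assumes fixed Gaussian amplitude parameters and ignores the inter-spike-interval and amplitude-dynamics terms of the actual model, so it is a minimal sketch of the idea, not the chapter's full Potts-model sampler; all names and numbers below are invented for the example.

    # Minimal sketch (not the chapter's full sampler): Gibbs sampling of the
    # "configuration", i.e. the neuron of origin of each spike, under a toy model
    # with Gaussian spike amplitudes and known, fixed per-neuron parameters.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 spike peak amplitudes generated by two hypothetical neurons.
    true_mu = np.array([4.0, 7.0])
    true_labels = rng.integers(0, 2, 200)
    amps = rng.normal(true_mu[true_labels], 0.8)

    mu = np.array([3.5, 7.5])      # assumed, fixed amplitude means (illustration only)
    sigma = 0.8                    # assumed, fixed amplitude standard deviation
    pi = np.array([0.5, 0.5])      # prior probability of each neuron

    labels = rng.integers(0, 2, amps.size)    # initial configuration
    for sweep in range(100):                  # Gibbs sweeps over the configuration
        for i in range(amps.size):
            # Posterior probability of each possible neuron of origin for spike i.
            log_p = np.log(pi) - 0.5 * ((amps[i] - mu) / sigma) ** 2
            p = np.exp(log_p - log_p.max())
            p /= p.sum()
            labels[i] = rng.choice(2, p=p)

    print("fraction of spikes assigned to neuron 1:", labels.mean())

    In the full method the per-neuron parameters are themselves updated at each sweep and the ISI statistics enter the posterior, which is what makes the Potts-model analogy and the associated Monte Carlo machinery useful.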

    Efficient spike-sorting of multi-state neurons using inter-spike intervals information

    We demonstrate the efficacy of a new spike-sorting method based on a Markov chain Monte Carlo (MCMC) algorithm by applying it to real data recorded from Purkinje cells (PCs) in young rat cerebellar slices. This algorithm is unique in its capability to estimate and make use of the firing statistics as well as the spike amplitude dynamics of the recorded neurons. PCs exhibit multiple discharge states, giving rise to multimodal inter-spike interval (ISI) histograms and to correlations between successive ISIs. The amplitude of the spikes generated by a PC in an "active" state decreases, a feature typical of many neurons from both vertebrates and invertebrates. These two features constitute a major and recurrent problem for all presently available spike-sorting methods. We first show that a hidden Markov model with 3 log-Normal states provides a flexible and satisfying description of the complex firing of single PCs. We then incorporate this model into our previous MCMC-based spike-sorting algorithm (Pouzat et al., 2004, J. Neurophysiol. 91, 2910-2928) and test this new algorithm on multi-unit recordings of bursting PCs. We show that our method successfully classifies the bursty spike trains fired by PCs by using an independent single-unit recording from a patch-clamp pipette.
    Comment: 25 pages, to be published in the Journal of Neuroscience Methods.
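    The kind of discharge model referred to above can be sketched in a few lines: a hidden Markov chain over three firing states, each emitting log-Normal inter-spike intervals. The transition matrix and state parameters below are invented for the example and are not the fitted values from the paper.

    # Illustrative simulation of ISIs from a 3-state HMM with log-Normal emissions.
    import numpy as np

    rng = np.random.default_rng(1)

    A = np.array([[0.90, 0.08, 0.02],     # state-to-state transition probabilities
                  [0.10, 0.85, 0.05],
                  [0.05, 0.15, 0.80]])
    log_mu = np.log([0.005, 0.05, 0.5])   # log of the median ISI in each state (s)
    log_sd = np.array([0.3, 0.4, 0.5])    # log-scale standard deviations

    state, isis = 0, []
    for _ in range(1000):
        isis.append(rng.lognormal(log_mu[state], log_sd[state]))
        state = rng.choice(3, p=A[state])

    isis = np.array(isis)
    print("ISI quantiles (s):", np.percentile(isis, [10, 50, 90]))

    Because the states have well-separated median ISIs, the resulting ISI histogram is multimodal and successive ISIs are correlated, exactly the two features that defeat sorters assuming Poisson firing.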

    On Goodness of Fit Tests For Models of Neuronal Spike Trains Considered as Counting Processes

    After an elementary derivation of the "time transformation", mapping a counting process onto a homogeneous Poisson process with rate one, a brief review of Ogata's goodness-of-fit tests is presented and a new test, the "Wiener process test", is proposed. This test is based on a straightforward application of Donsker's theorem to the intervals of time-transformed counting processes. The finite sample properties of the test are studied by Monte Carlo simulations. Performances on simulated as well as on real data are presented. It is argued that, due to its good finite sample properties, the new test is both a simple and a useful complement to Ogata's tests. Warnings are moreover given against the use of a single goodness-of-fit test.
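    The time transformation itself is easy to illustrate: evaluating the integrated intensity of the fitted model at the spike times yields a unit-rate Poisson process, so the transformed intervals are i.i.d. Exponential(1). The sketch below uses a homogeneous Poisson model with the rate estimated from the data, purely for brevity; it shows only one of Ogata's tests (a Kolmogorov-Smirnov test on the transformed intervals), not the paper's Wiener process test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    spikes = np.cumsum(rng.exponential(0.1, 500))   # simulated spike train (s)
    rate_hat = spikes.size / spikes[-1]             # fitted homogeneous rate (Hz)
    Lambda = rate_hat * spikes                      # time-transformed spike times
    u = 1.0 - np.exp(-np.diff(Lambda))              # Uniform(0, 1) if the model is right

    # One of Ogata's tests: a Kolmogorov-Smirnov test on the transformed intervals.
    print(stats.kstest(u, "uniform"))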

    SPySort: Neuronal Spike Sorting with Python

    Extracellular recording with multi-electrode arrays is one of the basic tools of contemporary neuroscience. These recordings are mostly used to monitor the activities, understood as sequences of emitted action potentials, of many individual neurons. But the raw data produced by extracellular recordings are most commonly a mixture of activities from several neurons. In order to get the activities of the individual contributing neurons, a pre-processing step called spike sorting is required. We present here a pure Python implementation of a well-tested spike-sorting procedure. The latter was designed in a modular way in order to favour a smooth transition from an interactive sorting, for instance with IPython, to an automatic one. Surprisingly enough, or sadly enough, depending on one's viewpoint, recoding our now 15-year-old procedure in Python was the occasion of major methodological improvements.
    Comment: Part of the Proceedings of the 7th European Conference on Python in Science (EuroSciPy 2014), Pierre de Buyl and Nelle Varoquaux, editors (2014).
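    To give a flavour of one early step of such a procedure, the snippet below performs robust event detection on a normalized extracellular trace, using the median absolute deviation (MAD) as a noise-scale estimate. It is a generic sketch of the idea, not SPySort's actual API; the function name, threshold value and toy data are made up for illustration.

    import numpy as np

    def detect_events(trace, threshold=4.0):
        """Return indices where the MAD-normalized trace first crosses the threshold."""
        centered = trace - np.median(trace)
        mad = np.median(np.abs(centered)) * 1.4826    # robust estimate of the noise SD
        normalized = centered / mad
        above = normalized > threshold
        # Keep only the first sample of each threshold crossing.
        return np.flatnonzero(above & ~np.roll(above, 1))

    # Toy usage: white noise with three injected "spikes".
    rng = np.random.default_rng(3)
    trace = rng.normal(0.0, 1.0, 30_000)
    trace[[5_000, 12_000, 25_000]] += 12.0
    print(detect_events(trace))

    Keeping each such step as a small, testable function is what allows the same code to be driven interactively from IPython or run unattended in a batch pipeline.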

    Supplementary Material for: Homogeneity and identity tests for unidimensional Poisson processes with an application to neurophysiological peri-stimulus time histograms–R version

    R version of the Supplementary Material for "Homogeneity and identity tests for unidimensional Poisson processes with an application to neurophysiological peri-stimulus time histograms".

    An algebraic method for eye blink artifacts detection in single channel EEG recordings

    Single-channel EEG systems are very useful in EEG-based applications where real-time processing, low computational complexity and low cumbersomeness are critical constraints. These include brain-computer interface and biofeedback devices, as well as some clinical applications such as EEG recording in babies or Alzheimer's disease recognition. In this paper we address the problem of eye blink artifact detection in such systems. We study an algebraic approach based on numerical differentiation, recently introduced and grounded in operational calculus. The occurrence of an artifact is modeled as an irregularity which appears explicitly, as a delay, in the (generalized) time derivative of the EEG signal. Manipulating such a delay is easy with the operational calculus, and it leads to a simple joint detection and localization algorithm. While the algorithm is devised based on continuous-time arguments, the final implementation step is fully realized in a discrete-time context, using very classical discrete-time FIR filters. The proposed approach is compared with three other approaches: (1) the very basic threshold approach, (2) the approach that combines the use of a median filter, a matched filter and the nonlinear energy operator (NEO), and (3) the wavelet-based approach. The comparison is done on: (a) artificially created signals where the eye activity is synthesized from real EEG recordings and (b) real single-channel EEG recordings from 32 different brain locations. Results are presented with Receiver Operating Characteristic (ROC) curves. They show that the proposed approach performs as well as or better than the other approaches, while having lower computational complexity and a simple real-time implementation. Comparison of the results on artificially created and real signals leads to the conclusion that, with detection techniques based on derivative estimation, we are able to detect not only eye blink artifacts but also any spike-shaped artifact, even if it is very low in amplitude.
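    The discrete-time flavour of such a detector can be sketched as follows: an FIR smoothing-differentiation filter estimates the derivative of the single-channel EEG, and deflections whose derivative magnitude exceeds a robust threshold are flagged. The paper derives its FIR coefficients from operational-calculus arguments; the Savitzky-Golay derivative filter used here is only an illustrative substitute, and the sampling rate, window length and toy signal are assumptions made for the example.

    import numpy as np
    from scipy.signal import savgol_filter

    fs = 256.0                                  # assumed sampling rate (Hz)
    rng = np.random.default_rng(4)

    # Toy single-channel EEG: background noise plus one blink-like deflection at t = 4 s.
    t = np.arange(0, 10, 1 / fs)
    eeg = rng.normal(0.0, 5.0, t.size)
    eeg += 80.0 * np.exp(-0.5 * ((t - 4.0) / 0.15) ** 2)

    # FIR smoothing-differentiation filter (51-tap Savitzky-Golay, first derivative).
    d_eeg = savgol_filter(eeg, window_length=51, polyorder=3, deriv=1, delta=1 / fs)

    threshold = 6.0 * np.median(np.abs(d_eeg)) / 0.6745   # robust threshold on |d/dt|
    print("samples flagged around t =", t[np.abs(d_eeg) > threshold][:5], "s")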

    Making neurophysiological data analysis reproducible. Why and how?

    Manuscript submitted to "The Journal of Physiology (Paris)", second version.
    Reproducible data analysis is an approach aiming at complementing classical printed scientific articles with everything required to independently reproduce the results they present. "Everything" covers here: the data, the computer codes and a precise description of how the code was applied to the data. A brief history of this approach is presented first, starting with what economists have been calling replication since the early eighties and ending with what is now called reproducible research in computational, data-analysis-oriented fields like statistics and signal processing. Since efficient tools are instrumental for a routine implementation of these approaches, a description of some of the available ones is presented next. A toy example then demonstrates the use of two open-source tools for reproducible data analysis: the "Sweave family" and the org-mode of emacs. The former is bound to R, while the latter can be used with R, Matlab, Python and many more "generalist" data-processing languages. Both solutions can be used with the Unix-like, Windows and Mac families of operating systems. It is argued that neuroscientists could communicate their results much more efficiently by adopting the reproducible research paradigm, from their lab books all the way to their articles, theses and books.

    SIMONE: a realistic neural network simulator to reproduce MEA-based recordings

    Contemporary multielectrode arrays (MEAs) used to record extracellular activity from neural tissues can deliver data at rates on the order of 100 Mbps. Such rates require efficient data compression and/or preprocessing algorithms implemented on an application-specific integrated circuit (ASIC) close to the MEA. We present SIMONE (Statistical sIMulation Of Neuronal networks Engine), a versatile simulation tool whose parameters can be either fixed or defined by a probability distribution. We validated our tool by simulating data recorded from the first olfactory relay of an insect. Several key aspects make this tool suitable for testing the robustness and accuracy of neural signal processing algorithms (such as the detection, alignment and classification of spikes). For instance, most of the parameters can be defined by a probability distribution, so tens of simulations can be obtained from the same scenario. This is especially useful when validating the robustness of a processing algorithm. Moreover, the number of active cells and the exact firing activity of each of them are perfectly known, which provides an easy way to test accuracy.
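    The principle of such a parameterized simulation, with exact ground truth retained for scoring detection or sorting algorithms, can be sketched in a few lines of Python. The distributions, waveform shape and numbers below are invented for illustration and are not SIMONE's actual parameters or interface.

    import numpy as np

    rng = np.random.default_rng(5)
    fs, duration = 15_000, 2.0                    # sampling rate (Hz) and duration (s)
    n_samples = int(fs * duration)

    n_cells = int(rng.integers(3, 8))             # number of active cells, drawn at random
    rates = rng.gamma(2.0, 5.0, n_cells)          # mean firing rates (Hz)
    amplitudes = rng.uniform(20.0, 80.0, n_cells) # peak spike amplitudes (arbitrary units)

    # A single made-up spike waveform template shared by all cells.
    t_wave = np.arange(-15, 45) / fs
    template = np.sin(2 * np.pi * 300 * t_wave) * np.exp(-np.abs(t_wave) * 400)

    trace = rng.normal(0.0, 5.0, n_samples)       # background noise
    ground_truth = {}
    for c in range(n_cells):
        spike_samples = np.flatnonzero(rng.random(n_samples) < rates[c] / fs)
        ground_truth[c] = spike_samples           # exact firing activity is kept
        for s in spike_samples:
            seg = trace[s:s + template.size]
            seg += amplitudes[c] * template[:seg.size]

    print({c: len(s) for c, s in ground_truth.items()})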